    Construction and Evaluation of an Ultra Low Latency Frameless Renderer for VR.

    © 2016 IEEE. Latency, the delay between a user's action and the response to that action, is known to be detrimental to virtual reality. Latency is typically treated as a discrete value characterising a delay that is constant in time and space, but this characterisation is incomplete. Latency changes across the display during scan-out, and how it does so depends on the rendering approach used. In this study, we present an ultra-low-latency real-time ray-casting renderer for virtual reality, implemented on an FPGA. Our renderer has a latency of 1 ms from tracker to pixel. Its frameless nature means that the region of the display with the lowest latency immediately follows the scan beam. This is in contrast to frame-based systems such as those using typical GPUs, for which latency increases as scan-out proceeds. Using a series of high- and low-speed videos of our system in use, we confirm its latency of 1 ms. We examine how the renderer performs when driving a traditional sequential scan-out display on a readily available HMD, the Oculus Rift DK2, and contrast this with an equivalent apparatus built using a GPU. Using captured human head motion and a set of image quality measures, we assess the ability of these systems to faithfully recreate the stimuli of an ideal virtual reality system: one with a zero-latency tracker, renderer and display running at 1 kHz. Finally, we examine the results of these quality measures and how each rendering approach is affected by velocity of movement and display persistence. We find that our system, with a lower average latency, can more faithfully draw what the ideal virtual reality system would. Further, we find that with low display persistence the sensitivity to velocity of both systems is lowered, but that it is much lower for ours.
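
    Below is a minimal Python sketch of the latency behaviour the abstract describes: in a frame-based pipeline the whole frame is rendered from one tracker sample, so latency grows as scan-out proceeds, whereas a frameless renderer shades pixels just ahead of the scan beam. The 75 Hz refresh rate of the DK2 and the 1 ms tracker-to-pixel figure come from the abstract and public specifications; the model itself is illustrative, not the authors' FPGA design.

        # Per-pixel latency across scan-out: frame-based vs frameless.
        REFRESH_HZ = 75                      # Oculus Rift DK2 panel refresh
        FRAME_MS = 1000.0 / REFRESH_HZ       # ~13.3 ms per frame

        def frame_based_latency_ms(row_fraction, pipeline_ms=FRAME_MS):
            """Frame-based GPU pipeline: one tracker sample per frame, so
            latency increases linearly down the display during scan-out."""
            return pipeline_ms + row_fraction * FRAME_MS

        def frameless_latency_ms(row_fraction, tracker_to_pixel_ms=1.0):
            """Frameless ray caster: each pixel is shaded just before the
            scan beam reaches it, so latency is near-constant everywhere."""
            return tracker_to_pixel_ms

        for frac in (0.0, 0.5, 1.0):
            print(f"row {frac:4.0%}: frame-based {frame_based_latency_ms(frac):5.1f} ms,"
                  f" frameless {frameless_latency_ms(frac):3.1f} ms")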

    Bridging the Gap between Probabilistic and Deterministic Models: A Simulation Study on a Variational Bayes Predictive Coding Recurrent Neural Network Model

    The current paper proposes a novel variational Bayes predictive coding RNN model, which can learn to generate fluctuating temporal patterns from exemplars. The model learns to maximize the lower bound of the weighted sum of the regularization and reconstruction error terms. We examined how this weighting can affect the development of different types of information processing while learning fluctuating temporal patterns. Simulation results show that strong weighting of the reconstruction term causes the development of deterministic chaos for imitating the randomness observed in target sequences, while strong weighting of the regularization term causes the development of stochastic dynamics imitating probabilistic processes observed in targets. Moreover, results indicate that the most generalized learning emerges between these two extremes. The paper concludes with implications in terms of the underlying neuronal mechanisms for autism spectrum disorder and for free action. Comment: This paper was accepted at the 24th International Conference on Neural Information Processing (ICONIP 2017). The previous submission to arXiv is replaced by this version because there was an error in an equation.
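
    As a concrete gloss on the weighted lower bound, here is a hedged PyTorch-style sketch: a single meta-parameter (here called w, a hypothetical name) trades the reconstruction error off against the KL regularizer, the two regimes the simulations contrast. The paper's exact parameterisation may differ.

        import torch
        import torch.nn.functional as F

        def weighted_negative_elbo(x, x_recon, mu, logvar, w):
            """Loss to minimise: w weights reconstruction against the KL term
            (unit-Gaussian prior). Per the abstract, strong reconstruction
            weighting pushes the RNN toward deterministic chaos; strong
            regularization weighting pushes it toward stochastic dynamics."""
            recon = F.mse_loss(x_recon, x, reduction="sum")
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            return w * recon + (1.0 - w) * kl

        # Dummy shapes just to show the call signature:
        x, x_recon = torch.randn(8, 5), torch.randn(8, 5)
        mu, logvar = torch.zeros(8, 2), torch.zeros(8, 2)
        loss = weighted_negative_elbo(x, x_recon, mu, logvar, w=0.9)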

    Evidence for surprise minimization over value maximization in choice behavior

    Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization than by reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus 'keep their options open'. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization than by utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations.
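
    The entropy-seeking prediction can be made concrete with a small sketch (the names and the additive form are illustrative assumptions, not the paper's fitted model): at equal expected utility, a surprise-minimising agent prefers the option whose outcome distribution has higher entropy.

        import numpy as np

        def entropy(p):
            p = np.asarray(p, dtype=float)
            return -np.sum(p * np.log(p + 1e-12))   # nats

        def surprise_min_score(p_outcomes, utilities):
            """Hypothetical score: expected utility plus outcome entropy,
            so agents 'keep their options open' at equal value."""
            return float(np.dot(p_outcomes, utilities)) + entropy(p_outcomes)

        # Equal expected utility (1.0), different outcome entropy:
        risky = surprise_min_score([0.5, 0.5], [1.0, 1.0])    # high entropy
        certain = surprise_min_score([1.0, 0.0], [1.0, 0.0])  # zero entropy
        print(risky > certain)   # True: entropy breaks the tie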

    Low-Latency Rendering With Dataflow Architectures

    Recent years have seen a resurgence of virtual reality (VR), sparked by the repurposing of low-cost COTS components. VR aims to generate stimuli that appear to come from a source other than the interface through which they are delivered. The synthetic stimuli replace real-world stimuli and transport the user to another, perhaps imaginary, “place.” To do this, we must overcome many challenges, often related to matching the synthetic stimuli to the expectations and behavior of the real world. One way in which the stimuli can fail is their latency: the time between a user's action and the computer's response. We constructed a novel VR renderer that optimized latency above all else. Our prototype allowed us to explore how latency affects human–computer interaction. We had to completely reconsider the interaction between time, space, and synchronization on displays and in the traditional graphics pipeline. Using a specialized architecture, dataflow computing, we combined consumer, industrial, and prototype components to create an integrated 1:1 room-scale VR system with a latency of under 3 ms. While this was prototype hardware, the considerations in achieving this performance inform the design of future VR pipelines, and our human factors studies have provided new and sometimes surprising contributions to the body of knowledge on latency in HCI.

    A probabilistic interpretation of PID controllers using active inference

    In the past few decades, probabilistic interpretations of brain functions have become widespread in cognitive science and neuroscience. The Bayesian brain hypothesis, predictive coding, the free energy principle and active inference are increasingly popular theories of cognitive functions that claim to unify understandings of life and cognition within general mathematical frameworks derived from information and control theory, statistical physics and machine learning. The connections between information and control theory have been discussed since the 1950s by scientists like Shannon and Kalman and have recently risen to prominence in modern stochastic optimal control theory. However, the implications of the confluence of these two theoretical frameworks for the biological sciences have been slow to emerge. Here we argue that if the active inference proposal is to be taken as a general process theory for biological systems, we need to consider how existing control-theoretical approaches to biological systems relate to it. In this work we focus on PID (Proportional-Integral-Derivative) controllers, one of the most common types of regulators employed in engineering and more recently used to explain behaviour in biological systems, e.g. chemotaxis in bacteria and amoebae or robust adaptation in biochemical networks. Using active inference, we derive a probabilistic interpretation of PID controllers, showing how they can fit a more general theory of life and cognition under the principle of (variational) free energy minimisation, using simple linear generative models.
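
    For readers unfamiliar with the controller itself, a minimal discrete-time PID update looks like the sketch below; under the paper's reading, the proportional, integral and derivative terms map onto precision-weighted prediction errors in a linear generative model (the mapping stated here is a loose gloss, not the paper's derivation).

        def pid_step(error, state, kp, ki, kd, dt):
            """One discrete PID update: u = kp*e + ki*integral(e) + kd*de/dt."""
            state["integral"] += error * dt
            derivative = (error - state["prev_error"]) / dt
            state["prev_error"] = error
            return kp * error + ki * state["integral"] + kd * derivative

        state = {"integral": 0.0, "prev_error": 0.0}
        u = pid_step(error=0.4, state=state, kp=1.0, ki=0.1, kd=0.05, dt=0.01)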

    The Effects of Low Latency on Pointing and Steering Tasks

    Latency is detrimental to interactive systems, especially pseudo-physical systems that emulate real-world behaviour. It prevents users from making quick corrections to their movement, and causes their experience to deviate from their expectations. Latency is a result of the processing and transport delays inherent in current computer systems. As such, while a number of studies have hypothesized that any latency will have a degrading effect, few have been able to test this for latencies below ~50 ms. In this study, we investigate the effects of latency on pointing and steering tasks. We design an apparatus with a latency lower than that of typical interactive systems, using it to perform interaction tasks based on Fitts’s law and the Steering law. We find evidence that latency begins to affect performance at ~16 ms, and that the effect is non-linear. Further, we find latency does not affect the various components of an aiming motion equally. We propose a three-stage characterisation of pointing movements, with each stage affected independently by latency. We suggest that understanding how users execute movement is essential for studying latency at low levels, as high-level metrics such as total movement time may be misleading.
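
    The two task models referenced are standard: Fitts's law predicts pointing time from a logarithmic index of difficulty, while the steering law for a straight tunnel gives a linear one. A short sketch follows (the coefficients are hypothetical; in a study like this they would be fitted per latency condition):

        import math

        def fitts_mt(a, b, distance, width):
            """Fitts's law, Shannon form: MT = a + b * log2(D/W + 1)."""
            return a + b * math.log2(distance / width + 1.0)

        def steering_mt(a, b, distance, width):
            """Steering law, straight tunnel of constant width: MT = a + b * D/W."""
            return a + b * (distance / width)

        print(fitts_mt(0.1, 0.15, distance=256, width=16))      # ID = log2(17) ~ 4.09 bits
        print(steering_mt(0.1, 0.002, distance=256, width=16))  # difficulty = D/W = 16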

    The AR-Rift 2 Prototype

    Video see-through augmented reality (VSAR) is an effective way of combining real and virtual scenes for head-mounted human-computer interfaces. In this paper we present the AR-Rift 2 system, a cost-effective prototype VSAR system built around the Oculus Rift CV1 head-mounted display (HMD). Current consumer camera systems, however, typically have latencies far higher than the rendering pipeline of current consumer HMDs, and a lower update rate than the display. We therefore measure the latency of the video and implement a simple image-warping method to ensure smooth movement of the video.
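
    One simple warp of the kind the abstract mentions is a rotational reprojection: shift the stale camera frame by the head rotation accumulated since capture. The sketch below uses a small-angle horizontal shift with an assumed focal length; it stands in for whichever warp the AR-Rift 2 actually implements.

        import math

        def warp_shift_px(yaw_at_capture, yaw_now, focal_px):
            """Horizontal pixel shift re-aligning a late camera frame with the
            current head yaw (radians); valid for small rotation deltas."""
            return focal_px * math.tan(yaw_now - yaw_at_capture)

        # e.g. 40 ms of camera latency during a 100 deg/s head turn, f ~ 600 px:
        delta = math.radians(100 * 0.040)
        print(warp_shift_px(0.0, delta, focal_px=600))   # ~42 px of correction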

    The increase of the functional entropy of the human brain with age

    We use entropy to characterize intrinsic ageing properties of the human brain. Analysis of fMRI data from a large dataset of individuals, using resting-state BOLD signals, demonstrated that a functional entropy associated with brain activity increases with age. Over an average lifespan, the entropy, which was calculated from a population of individuals, increased by approximately 0.1 bits, due to correlations in BOLD activity becoming more widely distributed. We attribute this to the number of excitatory neurons and the excitatory conductance decreasing with age. Incorporating these properties into a computational model leads to results quantitatively similar to the fMRI data. Our dataset included males and females, and we found significant differences between them. The entropy of males at birth was lower than that of females. However, the entropies of the two sexes increase at different rates, and intersect at approximately 50 years; after this age, males have a larger entropy.
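
    One common way to obtain an entropy in bits from correlated BOLD signals, consistent with the abstract's description though not necessarily the paper's exact estimator, is the differential entropy of a Gaussian fitted to the regional correlation matrix:

        import numpy as np

        def functional_entropy_bits(bold):
            """H = 0.5 * log2((2*pi*e)^n * det(C)) for the n-region correlation
            matrix C estimated from BOLD time series (rows = regions,
            columns = timepoints)."""
            c = np.corrcoef(bold)
            n = c.shape[0]
            _, logdet = np.linalg.slogdet(c)        # numerically stable log-det
            return 0.5 * (n * np.log2(2 * np.pi * np.e) + logdet / np.log(2))

        rng = np.random.default_rng(0)
        print(functional_entropy_bits(rng.standard_normal((10, 200))))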

    Reinforcement learning or active inference?

    This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free energy. Such agents learn the causal structure of the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming, namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof of concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.
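
    The benchmark itself is the standard underpowered mountain car: the engine alone cannot climb the hill, so the agent (whether trained by dynamic programming or driven by free-energy minimisation) must first rock away from the goal to build momentum. The dynamics, in the usual Sutton-and-Barto parameterisation:

        import math

        def mountain_car_step(pos, vel, action):
            """One step of the classic mountain-car dynamics; action in [-1, 1]."""
            vel += 0.001 * action - 0.0025 * math.cos(3 * pos)  # engine + gravity
            vel = max(-0.07, min(0.07, vel))
            pos = max(-1.2, min(0.6, pos + vel))
            return pos, vel

        pos, vel = -0.5, 0.0
        for _ in range(500):
            pos, vel = mountain_car_step(pos, vel, action=1.0)  # full throttle right
        print(pos < 0.6)   # True: constant throttle never reaches the goal at 0.6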

    Comparing families of dynamic causal models

    Mathematical models of scientific data can be formally compared using Bayesian model evidence. Previous applications in the biological sciences have mainly focussed on model selection, in which one first selects the model with the highest evidence and then makes inferences based on the parameters of that model. This “best model” approach is very useful but can become brittle if there are a large number of models to compare, and if different subjects use different models. To overcome this shortcoming we propose the combination of two further approaches: (i) family-level inference and (ii) Bayesian model averaging within families. Family-level inference removes uncertainty about aspects of model structure other than the characteristic of interest. For example: What are the inputs to the system? Is processing serial or parallel? Is it linear or nonlinear? Is it mediated by a single, crucial connection? We apply Bayesian model averaging within families to provide inferences about parameters that are independent of further assumptions about model structure. We illustrate the methods using Dynamic Causal Models of brain imaging data.
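
    A compact sketch of the family-level step (uniform model priors assumed; the helper name is ours, not SPM's): convert per-model log evidences to posterior model probabilities and sum them within each family. Bayesian model averaging would then weight parameter estimates by these same per-model posteriors inside the winning family.

        import numpy as np

        def family_posteriors(log_evidence, families):
            """Softmax over per-model log evidences, summed within families."""
            le = np.asarray(log_evidence, dtype=float)
            p = np.exp(le - le.max())       # subtract max for numerical stability
            p /= p.sum()
            return {name: float(p[list(idx)].sum()) for name, idx in families.items()}

        # Four models: two 'serial', two 'parallel' (hypothetical evidences).
        post = family_posteriors(
            log_evidence=[-102.0, -101.2, -104.5, -103.9],
            families={"serial": [0, 1], "parallel": [2, 3]},
        )
        print(post)   # ~{'serial': 0.93, 'parallel': 0.07}; family posteriors sum to 1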